Melanoma classification from dermoscopic images with deep learning has recently shown great potential for automatic early-stage melanoma diagnosis. However, limited by significant data imbalance and conspicuous extraneous artifacts, namely hairs and ruler markings, extracting discriminative features from dermoscopic images is very challenging. In this study, we seek to address these problems separately to achieve better representation learning for lesion features. Specifically, a GAN-based data augmentation (GDA) strategy is used to generate synthetic melanoma-positive images, in conjunction with the proposed implicit hair denoising (IHD) strategy, in which hair-related representations are implicitly disentangled via an auxiliary classifier network and reversely sent to the melanoma feature-extraction backbone for better melanoma-specific representation learning. Furthermore, to train the IHD module, hair noise is additionally annotated on the ISIC2020 dataset, making it the first large-scale dermoscopic dataset with annotations of hair-like artifacts. Extensive experiments demonstrate the superiority of the proposed framework as well as the effectiveness of each component. The improved dataset is publicly available at https://github.com/kirtsy/dermoscopicdataset.
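As a rough illustration of how hair-related representations can be "reversely sent" to the backbone, the sketch below pairs an auxiliary hair classifier with a gradient-reversal layer, so the shared features are pushed to become hair-invariant. Module and class names are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch: auxiliary hair classifier with gradient reversal into the
# shared backbone (one plausible reading of the IHD strategy described above).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the backbone.
        return -ctx.lamb * grad_output, None

class HairDenoisedClassifier(nn.Module):
    def __init__(self, backbone, feat_dim, lamb=1.0):
        super().__init__()
        self.backbone = backbone                   # melanoma feature extractor
        self.melanoma_head = nn.Linear(feat_dim, 2)
        self.hair_head = nn.Linear(feat_dim, 2)    # auxiliary hair classifier
        self.lamb = lamb

    def forward(self, x):
        feat = self.backbone(x)
        melanoma_logits = self.melanoma_head(feat)
        # Hair head trains normally, but its gradient is reversed w.r.t. feat,
        # discouraging the backbone from encoding hair artifacts.
        hair_logits = self.hair_head(GradReverse.apply(feat, self.lamb))
        return melanoma_logits, hair_logits
```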
Recently, deep neural networks have greatly advanced undersampled magnetic resonance image (MRI) reconstruction, where most studies follow a one-network-per-anatomy fashion: each expert network is trained and evaluated on a specific anatomy. Apart from the inefficiency of training multiple independent models, such a convention ignores the de-aliasing knowledge shared across anatomies, which could benefit each of them. To exploit the shared knowledge, a naive approach is to combine the data from all anatomies to train an all-round network. Unfortunately, despite the existence of shared de-aliasing knowledge, we find that the knowledge exclusive to each anatomy can deteriorate specific reconstruction targets, degrading overall performance. Motivated by this observation, we propose a novel deep MRI reconstruction framework with both anatomy-shared and anatomy-specific parameterized learners, aiming to "seek common ground while reserving differences" across anatomies. In particular, the primary anatomy-shared learners are exposed to different anatomies to model the shared de-aliasing knowledge, while the lightweight anatomy-specific learners are trained on their target anatomy to capture exclusive knowledge. Four different implementations of the anatomy-specific learners are presented and explored on top of our framework in two MRI reconstruction networks. Comprehensive experiments on brain, knee, and cardiac MRI datasets demonstrate that three of these learners are able to enhance reconstruction performance via collaborative learning across multiple anatomies.
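One simple way to realize the "shared plus specific" parameterization is a block whose main convolutions see every anatomy while a tiny per-anatomy adapter holds the exclusive parameters; the sketch below is an assumption about the structure, with illustrative names, not the paper's exact learners.

```python
# Hedged sketch: anatomy-shared conv path plus a per-anatomy residual adapter.
import torch.nn as nn

class SharedSpecificBlock(nn.Module):
    def __init__(self, channels, anatomies=("brain", "knee", "cardiac")):
        super().__init__()
        self.shared = nn.Sequential(               # trained on all anatomies
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True))
        self.specific = nn.ModuleDict({            # one light adapter each
            a: nn.Conv2d(channels, channels, 1) for a in anatomies})

    def forward(self, x, anatomy):
        h = self.shared(x)
        return h + self.specific[anatomy](h)       # residual specific branch
```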
The variational autoencoder (VAE) is a popular method for drug discovery, and many architectures and pipelines have been proposed to improve its performance. But the VAE model itself suffers from deficiencies such as poor manifold recovery when data lie on a low-dimensional manifold embedded in a higher-dimensional ambient space, and these deficiencies manifest differently in each application. Their consequences in drug discovery are somewhat under-explored. In this paper, we study how to improve the similarity between the data generated by a VAE and the training dataset by improving manifold recovery via a 2-stage VAE, in which the second-stage VAE is trained on the latent space of the first. We experimentally evaluate our approach on the ChEMBL dataset as well as a polymer dataset. On both datasets, the 2-stage VAE method improves the property statistics significantly over a pre-existing method.
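The 2-stage recipe is compact enough to sketch: train VAE-1 on the molecules, then train VAE-2 on VAE-1's latent codes, and sample by chaining the two decoders. `train_vae`, `encode`, and `decode` below are placeholders for whichever VAE implementation is in use; `encode` is assumed to return a (mean, log-variance) pair.

```python
# Hedged sketch of 2-stage VAE training and sampling, with placeholder APIs.
import torch

def train_two_stage(vae1, vae2, data, train_vae):
    train_vae(vae1, data)                      # stage 1: fit the raw data
    with torch.no_grad():
        z1, _ = vae1.encode(data)              # posterior means become "data"
    train_vae(vae2, z1)                        # stage 2: fit the latent density

def sample(vae1, vae2, n, d2):
    z2 = torch.randn(n, d2)                    # draw from the VAE-2 prior
    return vae1.decode(vae2.decode(z2))        # z2 -> z1 -> molecule/polymer
```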
Variational autoencoders (VAEs) are among the most commonly used generative models, particularly for image data. A prominent difficulty in training VAEs is data supported on a low-dimensional manifold. Recent work by Dai and Wipf (2019) suggests that on low-dimensional data, the generator converges to a solution with zero variance that is correctly supported on the ground-truth manifold. In this paper, combining theoretical and empirical results, we show that the story is more subtle. Precisely, we show that for linear encoders/decoders the story is mostly true: VAE training does recover a generator whose support equals that of the ground-truth manifold, but this is due to the implicit bias of gradient descent rather than merely the VAE loss itself. In the nonlinear case, we show that VAE training frequently learns a higher-dimensional manifold that is a superset of the ground-truth manifold.
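To make the linear support-recovery question concrete, one can run a toy experiment: put data on a 2-D subspace of R^10, fit a linear encoder/decoder by plain gradient descent, and measure the principal angles between the learned decoder's column space and the true subspace. The sketch below uses a linear autoencoder as a simplified stand-in for the full linear-VAE setting and is entirely illustrative, not the paper's experiment.

```python
# Hedged toy check of support recovery in the linear case.
import torch

torch.manual_seed(0)
d, k, n = 10, 2, 4096
U = torch.linalg.qr(torch.randn(d, k)).Q          # ground-truth subspace basis
X = torch.randn(n, k) @ U.T                        # samples supported on span(U)

E = (0.1 * torch.randn(d, k)).requires_grad_()     # linear encoder
D = (0.1 * torch.randn(d, k)).requires_grad_()     # linear decoder
opt = torch.optim.SGD([E, D], lr=0.05)
for _ in range(3000):
    loss = ((X - (X @ E) @ D.T) ** 2).mean()       # reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()

# Cosines of principal angles between span(D) and span(U):
# values near 1 mean the decoder's support matches the true manifold.
Q, _ = torch.linalg.qr(D.detach())
print(torch.linalg.svdvals(Q.T @ U))
```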
Federated learning (FL) has emerged to jointly train models over distributed datasets in the IoT while avoiding the need for central data collection. Because of limited observation ranges, such datasets reflect only local information, which limits the quality of the trained models. In practical networks, global information and local observations always coexist, and both must be considered jointly to learn reasonable policies. However, in horizontal FL among distributed clients, the central agency acts only as a model aggregator and does not exploit its global features to further improve the model. This can substantially degrade performance on tasks such as traffic prediction, where global information clearly improves accuracy. Meanwhile, such global features may not be directly transmitted to the agents for data-security reasons. How to utilize the global observations residing at the central agency while protecting their security thus arises as an important problem in FL. In this paper, we develop a vertical-horizontal federated learning (VHFL) process in which the global features are shared with the agents without extra communication rounds. Accounting for delay and packet loss, we analyze the convergence of VHFL in the networked system and validate its performance by experiments. The proposed VHFL improves accuracy compared with horizontal FL while protecting the security of the global data.
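A minimal way to picture the vertical-horizontal split: each client's model consumes its local observation concatenated with a server-supplied global feature, while the server still only aggregates weights FedAvg-style. The names and shapes below are illustrative assumptions, not the paper's protocol.

```python
# Hedged sketch: client model consuming local + global features, plus FedAvg.
import torch
import torch.nn as nn

class VHFLClientModel(nn.Module):
    def __init__(self, d_local, d_global, d_hidden, d_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_local + d_global, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_out))

    def forward(self, x_local, x_global):
        # x_global is provided by the central agency within the normal round,
        # so no extra communication round is needed.
        return self.net(torch.cat([x_local, x_global], dim=-1))

def fedavg(state_dicts):
    # Plain weight averaging over client models (the horizontal FL part).
    return {k: sum(s[k] for s in state_dicts) / len(state_dicts)
            for k in state_dicts[0]}
```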
With the development of deep learning (DL), natural language processing (NLP) allows us to analyze and understand large volumes of text. With the help of NLP, we can therefore perform semantic communication via joint semantic source and channel coding over a noisy channel. However, existing approaches to this goal use a fixed Transformer, ignoring the variation in the amount of semantic information contained in each sentence. To address this problem, we propose a new semantic communication system based on the Universal Transformer. Compared with the traditional Transformer, the Universal Transformer introduces an adaptive recurrence mechanism. Through this mechanism, the new semantic communication system can transmit sentences carrying different amounts of semantic information more flexibly and achieve better end-to-end performance under various channel conditions.
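The adaptive recurrence idea amounts to applying one weight-shared layer a variable number of times, with a halting unit deciding when a sentence has been refined enough. The sketch below is a simplified ACT-style variant (it halts rather than mixing weighted states) with assumed names, not the paper's exact encoder.

```python
# Hedged sketch of adaptive recurrence over a shared Transformer layer.
import torch
import torch.nn as nn

class AdaptiveUTEncoder(nn.Module):
    def __init__(self, d_model, nhead, max_steps=6, threshold=0.99):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, nhead,
                                                batch_first=True)
        self.halt = nn.Linear(d_model, 1)          # per-token halting score
        self.max_steps, self.threshold = max_steps, threshold

    def forward(self, x):                          # x: (batch, seq, d_model)
        halt_sum = torch.zeros(x.size(0), x.size(1), device=x.device)
        for _ in range(self.max_steps):
            x = self.layer(x)                      # same weights every step
            halt_sum += torch.sigmoid(self.halt(x)).squeeze(-1)
            if bool((halt_sum > self.threshold).all()):
                break                              # every token is "done"
        return x
```

Sentences with richer semantic content accumulate halting probability more slowly, so they receive more refinement steps before transmission.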
In this work we study statistical properties of graph-based algorithms for multi-manifold clustering (MMC). In MMC the goal is to retrieve the multi-manifold structure underlying a given Euclidean data set when the data set is assumed to be obtained by sampling a distribution on a union of manifolds $\mathcal{M} = \mathcal{M}_1 \cup\dots \cup \mathcal{M}_N$ that may intersect with each other and that may have different dimensions. We investigate sufficient conditions that similarity graphs on data sets must satisfy in order for their corresponding graph Laplacians to capture the right geometric information to solve the MMC problem. Precisely, we provide high probability error bounds for the spectral approximation of a tensorized Laplacian on $\mathcal{M}$ with a suitable graph Laplacian built from the observations; the recovered tensorized Laplacian contains all geometric information of all the individual underlying manifolds. We provide an example of a family of similarity graphs, which we call annular proximity graphs with angle constraints, satisfying these sufficient conditions. We contrast our family of graphs with other constructions in the literature based on the alignment of tangent planes. Extensive numerical experiments expand the insights that our theory provides on the MMC problem.
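For context, a plain spectral-clustering baseline on a k-NN similarity graph makes the pipeline concrete; the paper's annular proximity graphs with angle constraints replace this generic graph construction. The toy data and parameters below are illustrative.

```python
# Hedged baseline: spectral clustering on a k-NN graph over two
# intersecting circles (a toy union of manifolds in R^2).
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, size=(500, 1))
X = np.vstack([np.hstack([np.cos(t), np.sin(t)]),            # circle 1
               np.hstack([np.cos(t) + 1.0, np.sin(t)])])     # circle 2, shifted

labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                            n_neighbors=10).fit_predict(X)
```

A generic graph like this tends to glue clusters together near the manifold intersections; that failure mode is exactly what the angle constraints in the proposed graph family are designed to avoid.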
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task that frees people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance on the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, such unified approaches for UDA tasks are lacking in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5$\rightarrow$Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
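One simple form of image-level photometric alignment is matching a target image's per-channel statistics to source-domain statistics; the paper's global photometric alignment module is more elaborate, so the sketch below is only a minimal stand-in with assumed shapes.

```python
# Hedged sketch: per-channel mean/std matching as photometric alignment.
import torch

def photometric_align(target, src_mean, src_std, eps=1e-6):
    """target: (C, H, W) image; src_mean, src_std: (C,) source statistics."""
    t_mean = target.mean(dim=(1, 2), keepdim=True)
    t_std = target.std(dim=(1, 2), keepdim=True)
    normed = (target - t_mean) / (t_std + eps)     # whiten target channels
    return normed * src_std.view(-1, 1, 1) + src_mean.view(-1, 1, 1)
```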
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
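One plausible reading of "the style code adjusts the weights of the feed-forward layers" is a FiLM-like modulation, where the style code predicts a per-dimension scale and shift for the FFN's hidden activations. The sketch below is that reading with illustrative names, not the authors' released implementation.

```python
# Hedged sketch: style-modulated feed-forward layer in a transformer decoder.
import torch
import torch.nn as nn

class StyleAwareFFN(nn.Module):
    def __init__(self, d_model, d_ff, d_style):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_ff)
        self.fc2 = nn.Linear(d_ff, d_model)
        self.to_scale_shift = nn.Linear(d_style, 2 * d_ff)

    def forward(self, x, style_code):
        # x: (batch, seq, d_model); style_code: (batch, d_style)
        scale, shift = self.to_scale_shift(style_code).chunk(2, dim=-1)
        h = self.fc1(x)
        h = torch.relu(h * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1))
        return self.fc2(h)                         # style-conditioned output
```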
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and varying nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, making predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
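The photometric error driving both stages is the standard self-supervised depth/pose objective: back-project frame t with predicted depth, transform by the predicted ego-motion, re-project into frame t+1, and penalize the appearance difference. Below is a minimal single-scale L1 version; real pipelines typically add SSIM and masking, and the tensor shapes here are assumptions.

```python
# Hedged sketch of the warp-based photometric loss (single scale, L1 only).
import torch
import torch.nn.functional as F

def photometric_loss(img_t, img_t1, depth_t, T_t_to_t1, K):
    """img_*: (B, 3, H, W); depth_t: (B, 1, H, W);
    T_t_to_t1: (B, 4, 4) ego-motion; K: (3, 3) camera intrinsics."""
    B, _, H, W = img_t.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float().view(3, -1)
    cam = torch.linalg.inv(K) @ pix                       # camera rays, (3, HW)
    pts = depth_t.view(B, 1, -1) * cam.unsqueeze(0)       # 3-D points in frame t
    pts = T_t_to_t1[:, :3, :3] @ pts + T_t_to_t1[:, :3, 3:]   # move to frame t+1
    proj = K.unsqueeze(0) @ pts
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)        # pixel coordinates
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,       # normalize to [-1, 1]
                        2 * uv[:, 1] / (H - 1) - 1], -1).view(B, H, W, 2)
    warped = F.grid_sample(img_t1, grid, align_corners=True)
    return (img_t - warped).abs().mean()                  # L1 photometric error
```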